On Learning Interpreted Languages with Recurrent Models

Authors

Abstract

Can recurrent neural nets, inspired by human sequential data processing, learn to understand language? We construct simplified data sets reflecting core properties of natural language as modeled in formal syntax and semantics: recursive syntactic structure and compositionality. We find that LSTM and GRU networks generalize to compositional interpretation well, but only in the most favorable learning settings, with a well-paced curriculum, extensive training data, and left-to-right (but not right-to-left) composition.
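The setting the abstract describes, a recurrent network interpreting expressions whose meanings compose left to right, trained under a curriculum of growing expression length, can be sketched in a few lines. The toy task, data, and hyperparameters below are illustrative assumptions, not the paper's actual experimental setup: each token names a function over a small numeric domain, an expression denotes the left-to-right composition of those functions, and an LSTM must predict the denoted value, including for lengths never seen in training.

```python
# A minimal sketch (not the paper's actual data or task): an LSTM learning a
# toy compositional language with a length-based curriculum. Every name and
# hyperparameter here is an illustrative assumption.
import random
import torch
import torch.nn as nn

DOMAIN = 10                                  # values are integers mod 10
FUNCS = {0: lambda x: (x + 1) % DOMAIN,      # token 0: successor
         1: lambda x: (x + 3) % DOMAIN,      # token 1: add three
         2: lambda x: (2 * x) % DOMAIN}      # token 2: double

def sample(length):
    """Random expression of a given length and its compositional value."""
    tokens = [random.randrange(len(FUNCS)) for _ in range(length)]
    value = 0
    for t in tokens:                         # left-to-right interpretation
        value = FUNCS[t](value)
    return tokens, value

class Interpreter(nn.Module):
    def __init__(self, vocab=len(FUNCS), hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, DOMAIN)  # classify the denoted value

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.out(h[:, -1])             # read out after the last token

def batch(length, size=64):
    data = [sample(length) for _ in range(size)]
    x = torch.tensor([t for t, _ in data])
    y = torch.tensor([v for _, v in data])
    return x, y

model = Interpreter()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Well-paced curriculum: train on each length before moving to the next.
for length in range(1, 9):
    for step in range(300):
        x, y = batch(length)
        loss = loss_fn(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

# Probe generalization to expressions longer than anything seen in training.
x, y = batch(12)
acc = (model(x).argmax(dim=1) == y).float().mean().item()
print(f"accuracy on length-12 expressions: {acc:.2f}")
```

The curriculum loop mirrors the "well-paced curriculum" the abstract credits: the network masters short expressions first, so its recurrent state can track the composed value as expressions grow longer.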


Similar Resources

Parsing Minimalist Languages with Interpreted Regular Tree Grammars

Minimalist Grammars (MGs) (Stabler, 1997) are a formalisation of Chomsky’s minimalist program (Chomsky, 1995), which currently dominates much of mainstream syntax. MGs are simple and intuitive to work with, and are mildly context sensitive (Michaelis, 1998), putting them in the right general class for human language (Joshi, 1985). Minimalist Grammars are known to be more succinct than their Mu...


An architecture for interpreted dynamic object-oriented languages




Journal

Journal title: Computational Linguistics

Year: 2022

ISSN: 1530-9312, 0891-2017

DOI: https://doi.org/10.1162/coli_a_00431